
    Wavelets and Fast Numerical Algorithms

    Wavelet-based algorithms in numerical analysis are similar to other transform methods in that vectors and operators are expanded into a basis and the computations take place in this new system of coordinates. However, due to the recursive definition of wavelets, their controllable localization in both space and wave number (time and frequency) domains, and the vanishing-moments property, wavelet-based algorithms exhibit new and important properties. For example, the multiresolution structure of the wavelet expansions brings about an efficient organization of transformations on a given scale and of interactions between different neighbouring scales. Moreover, wide classes of operators, which naively would require a full (dense) matrix for their numerical description, have sparse representations in wavelet bases. For these operators, sparse representations lead to fast numerical algorithms and thus address a critical numerical issue. We note that wavelet-based algorithms provide a systematic generalization of the Fast Multipole Method (FMM) and its descendants. These topics will be the subject of the lecture. Starting from the notion of multiresolution analysis, we will consider the so-called non-standard form (which achieves decoupling among the scales) and the associated fast numerical algorithms. Examples of non-standard forms of several basic operators (e.g., derivatives) will be computed explicitly.
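
    A minimal numerical sketch of the sparsity claim (not from the lecture notes; the kernel, matrix size, and threshold are illustrative assumptions): a dense matrix discretizing a kernel that is smooth away from the diagonal becomes approximately sparse after a two-sided Haar wavelet transform. This illustrates the standard form; the non-standard form discussed in the lecture organizes the coefficients differently so that the scales decouple.

```python
# Toy illustration: compress a dense Calderon-Zygmund-like kernel
# K(i, j) = 1 / (|i - j| + 1) by conjugating with an orthonormal Haar
# transform and thresholding small entries.  The kernel, size, and threshold
# are illustrative choices, not values from the lecture notes.
import numpy as np

def haar_matrix(n):
    """Orthonormal matrix of a full multilevel Haar transform (n a power of 2)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    # Averages (coarse scale) stacked over differences (detail coefficients).
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)
    return np.vstack([top, bottom])

n = 256
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
K = 1.0 / (np.abs(i - j) + 1.0)          # dense kernel, smooth away from the diagonal

W = haar_matrix(n)
K_w = W @ K @ W.T                        # the operator expressed in the Haar basis

eps = 1e-6 * np.abs(K_w).max()
K_sparse = np.where(np.abs(K_w) > eps, K_w, 0.0)

print("nonzero fraction after thresholding:", np.count_nonzero(K_sparse) / n**2)
x = np.random.default_rng(0).standard_normal(n)
err = np.linalg.norm(W.T @ (K_sparse @ (W @ x)) - K @ x) / np.linalg.norm(K @ x)
print("relative error of the compressed operator:", err)
```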

    Fast and accurate con-eigenvalue algorithm for optimal rational approximations

    The need to compute small con-eigenvalues and the associated con-eigenvectors of positive-definite Cauchy matrices naturally arises when constructing rational approximations with a (near) optimally small $L^{\infty}$ error. Specifically, given a rational function with $n$ poles in the unit disk, a rational approximation with $m \ll n$ poles in the unit disk may be obtained from the $m$th con-eigenvector of an $n \times n$ Cauchy matrix, where the associated con-eigenvalue $\lambda_{m} > 0$ gives the approximation error in the $L^{\infty}$ norm. Unfortunately, standard algorithms do not accurately compute small con-eigenvalues (and the associated con-eigenvectors) and, in particular, yield few or no correct digits for con-eigenvalues smaller than the machine roundoff. We develop a fast and accurate algorithm for computing con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices, yielding even the tiniest con-eigenvalues with high relative accuracy. The algorithm computes the $m$th con-eigenvalue in $\mathcal{O}(m^{2}n)$ operations and, since the con-eigenvalues of positive-definite Cauchy matrices decay exponentially fast, we obtain (near) optimal rational approximations in $\mathcal{O}(n(\log\delta^{-1})^{2})$ operations, where $\delta$ is the approximation error in the $L^{\infty}$ norm. We derive error bounds demonstrating high relative accuracy of the computed con-eigenvalues and the high accuracy of the unit con-eigenvectors. We also provide examples of using the algorithm to compute (near) optimal rational approximations of functions with singularities and sharp transitions, where approximation errors close to machine precision are obtained. Finally, we present numerical tests on random (complex-valued) Cauchy matrices to show that the algorithm computes all the con-eigenvalues and con-eigenvectors with nearly full precision.
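
    The following sketch shows the failure mode that motivates the paper, not the paper's algorithm. It builds a Hermitian positive-definite Cauchy-type matrix (an illustrative stand-in for the matrices arising in rational approximation) and computes its con-eigenvalues by the naive reduction to an ordinary eigenvalue problem; the squaring in that reduction is exactly what destroys the relative accuracy of the small con-eigenvalues.

```python
# Hedged illustration of the problem the paper addresses (naive method only).
# C[i, j] = 1 / (x[i] + conj(x[j])) with Re(x[i]) > 0 is Hermitian positive
# definite; its con-eigenvalues (C @ conj(u) = lam * u, lam >= 0) are computed
# here by the textbook reduction C @ conj(C) @ u = lam**2 * u.
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = np.geomspace(1.0, 200.0, n) + 1j * rng.uniform(-0.5, 0.5, n)   # Re(x) > 0
C = 1.0 / (x[:, None] + np.conj(x[None, :]))                        # Cauchy-type matrix

# Naive con-eigenvalues: square roots of the eigenvalues of C @ conj(C).
lam2 = np.linalg.eigvals(C @ np.conj(C))
lam_naive = np.sqrt(np.sort(np.abs(lam2))[::-1])

for k, lam in enumerate(lam_naive[:20]):
    print(f"lam_{k:2d} ~ {lam:.3e}")
# The printed values decay exponentially at first, then flatten near
# sqrt(machine eps) * ||C||: below that level this reduction yields no correct
# digits, which is the regime the paper's O(m^2 n) algorithm handles with
# high relative accuracy.
```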

    LU Factorization of Non-standard Forms and Direct Multiresolution Solvers

    In this paper we introduce the multiresolution LU factorization of non-standard forms (NS-forms) and develop fast direct multiresolution methods for solving systems of linear algebraic equations arising in elliptic problems. The NS-form has been shown to provide a sparse representation for a wide class of operators, including those arising in strictly elliptic problems. For example, Green's functions of such operators (which are ordinarily represented by dense matrices, e.g., of size N by N) may be represented by (−log ϵ)·N coefficients, where ϵ is the desired accuracy. The NS-form is not an ordinary matrix representation, and the usual operations, such as multiplication of a vector by the NS-form, are different from the standard matrix–vector multiplication. We show that (up to a fixed but arbitrary accuracy) the sparsity of the LU factorization is maintained on any finite number of scales for self-adjoint strictly elliptic operators and their inverses. Moreover, the condition number of the matrices for which we compute the usual LU factorization at different scales is O(1). The direct multiresolution solver presents, therefore, an alternative to a multigrid approach and may be interpreted as a multigrid method with a single V-cycle. For self-adjoint strictly elliptic operators the multiresolution LU factorization requires only O((−log ϵ)²·N) operations. Combined with O(N) procedures of multiresolution forward and back substitutions, it yields a fast direct multiresolution solver. We also describe direct methods for solving matrix equations and demonstrate how to construct the inverse in O(N) operations (up to a fixed but arbitrary accuracy). We present several numerical examples which illustrate the algorithms developed in the paper. Finally, we outline several directions for generalization of our algorithms. In particular, we note that the multidimensional versions of the multiresolution LU factorization maintain sparsity, unlike the usual LU factorization.
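
    A toy two-scale sketch of why a direct multiresolution solver is possible (a standard-form construction with illustrative sizes, not the paper's NS-form LU factorization): after one level of the Haar transform, the fine-scale (detail) block of the 1D Laplacian has an O(1) condition number, so it can be eliminated stably, and the resulting Schur complement is again a Laplacian-like coarse-scale operator, so the reduction can be repeated scale by scale.

```python
# Toy two-scale analogue of a multiresolution elimination step.  The matrix,
# size, and boundary conditions are illustrative assumptions.
import numpy as np

n = 256                                                    # fine-scale unknowns
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # Dirichlet 1D Laplacian

# One level of the Haar transform: first n//2 rows are averages (coarse scale s),
# last n//2 rows are differences (detail scale d).
W = np.zeros((n, n))
for i in range(n // 2):
    W[i, 2 * i] = W[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)
    W[n // 2 + i, 2 * i] = 1.0 / np.sqrt(2.0)
    W[n // 2 + i, 2 * i + 1] = -1.0 / np.sqrt(2.0)

B = W @ A @ W.T
m = n // 2
A_ss, A_sd = B[:m, :m], B[:m, m:]
A_ds, A_dd = B[m:, :m], B[m:, m:]

# The detail-detail block is uniformly well conditioned, so it can be factored
# stably at this scale ...
print("cond(A_dd) =", np.linalg.cond(A_dd))    # O(1), independent of n
print("cond(A)    =", np.linalg.cond(A))       # grows like n**2

# ... and eliminating it leaves a coarse-scale system of half the size that is
# again Laplacian-like, so the same step can be applied recursively.
S = A_ss - A_sd @ np.linalg.solve(A_dd, A_ds)
print("coarse block size:", S.shape)
```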

    Exponential sum approximations for $t^{-\beta}$

    Given $\beta > 0$ and $\delta > 0$, the function $t^{-\beta}$ may be approximated for $t$ in a compact interval $[\delta, T]$ by a sum of terms of the form $w e^{-at}$, with parameters $w > 0$ and $a > 0$. One such approximation, studied by Beylkin and Monzón, is obtained by applying the trapezoidal rule to an integral representation of $t^{-\beta}$, after which Prony's method is applied to reduce the number of terms in the sum with essentially no loss of accuracy. We review this method, and then describe a similar approach based on an alternative integral representation. The main difference is that the new approach achieves much better results before the application of Prony's method; after applying Prony's method the performance of both is much the same.
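
    A hedged sketch of the trapezoidal-rule construction (Prony reduction omitted; the integral representation used here is the standard Gamma-function one, and the step size and truncation bounds are illustrative choices rather than values from the paper):

```python
# Start from Gamma(beta) * t**(-beta) = \int_0^\infty s**(beta-1) * exp(-t*s) ds,
# substitute s = exp(x), and apply the trapezoidal rule with step h on a
# truncated range [x_min, x_max].  Step size and truncation are illustrative.
import numpy as np
from math import gamma

beta, delta, T = 0.5, 1e-6, 1.0
h = 0.25
x = np.arange(-80.0, 25.0 + h, h)        # quadrature nodes after s = exp(x)
a = np.exp(x)                            # exponents a_k > 0
w = h * np.exp(beta * x) / gamma(beta)   # weights w_k > 0

def exp_sum(t):
    """Approximation of t**(-beta) as sum_k w_k * exp(-a_k * t)."""
    t = np.asarray(t, dtype=float)
    return np.exp(-np.outer(t, a)) @ w

t = np.geomspace(delta, T, 2000)
rel_err = np.abs(exp_sum(t) - t**(-beta)) * t**beta
print(f"{len(a)} terms, max relative error on [{delta:g}, {T:g}]: {rel_err.max():.2e}")
```

    Reducing the several hundred terms produced here to a near-minimal number, at essentially the same accuracy, is the job of the subsequent Prony step that the abstract refers to.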

    Wavelet Methods in the Relativistic Three-Body Problem

    In this paper we discuss the use of wavelet bases to solve the relativistic three-body problem. Wavelet bases can be used to transform momentum-space scattering integral equations into an approximate system of linear equations with a sparse matrix. This has the potential to reduce the size of realistic three-body calculations with minimal loss of accuracy. The wavelet method leads to a clean, interaction-independent treatment of the scattering singularities which does not require any subtractions.
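
    In the same spirit as the sketch after the first abstract, here is a toy 1D second-kind integral equation solved through a thresholded Haar representation of its kernel (the kernel, grid, and threshold are illustrative assumptions, the equation is non-singular, and nothing here reflects the actual relativistic three-body kernels or their singularity treatment):

```python
# Toy analogue: discretize f(x) = g(x) + \int_0^1 K(x, y) f(y) dy, move the
# kernel matrix into a Haar wavelet basis, drop small entries, and solve the
# resulting (approximately sparse) linear system.
import numpy as np

def haar(n):
    """Orthonormal multilevel Haar transform matrix (n a power of 2)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)
    return np.vstack([top, bot])

n = 256
x = (np.arange(n) + 0.5) / n                                   # midpoint rule on [0, 1]
K = 0.3 / (1.0 + 25.0 * (x[:, None] - x[None, :]) ** 2) / n    # smooth kernel * weight
g = np.sin(2 * np.pi * x)

W = haar(n)
A_w = W @ (np.eye(n) - K) @ W.T                   # (I - K) expressed in the Haar basis
A_w[np.abs(A_w) < 1e-8] = 0.0                     # thresholding -> approximately sparse
print("nonzero fraction:", np.count_nonzero(A_w) / n**2)

f_sparse = W.T @ np.linalg.solve(A_w, W @ g)      # solve in the wavelet basis
f_dense = np.linalg.solve(np.eye(n) - K, g)       # reference dense solve
print("relative difference:", np.linalg.norm(f_sparse - f_dense) / np.linalg.norm(f_dense))
```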

    Approximating a Wavefunction as an Unconstrained Sum of Slater Determinants

    The wavefunction for the multiparticle Schr\"odinger equation is a function of many variables and satisfies an antisymmetry condition, so it is natural to approximate it as a sum of Slater determinants. Many current methods do so, but they impose additional structural constraints on the determinants, such as orthogonality between orbitals or an excitation pattern. We present a method without any such constraints, by which we hope to obtain much more efficient expansions, and insight into the inherent structure of the wavefunction. We use an integral formulation of the problem, a Green's function iteration, and a fitting procedure based on the computational paradigm of separated representations. The core procedure is the construction and solution of a matrix-integral system derived from antisymmetric inner products involving the potential operators. We show how to construct and solve this system with computational complexity competitive with current methods.
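
    A hedged illustration of the kind of antisymmetric inner product that underlies the fitting procedure: for Slater determinants built from one-particle orbitals, the overlap reduces to the determinant of the orbital overlap matrix (the Löwdin rule). The discretization below (orbitals as plain vectors with a Euclidean inner product) is an illustrative assumption, and the paper's matrix-integral system also involves the potential operators, which are not shown.

```python
# Overlap of two Slater determinants via the Lowdin rule, checked against the
# explicit permutation sum that the antisymmetry produces.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n_part, n_grid = 3, 40
A = rng.standard_normal((n_part, n_grid))   # orbitals of determinant Phi_A (rows)
B = rng.standard_normal((n_part, n_grid))   # orbitals of determinant Phi_B

# Lowdin rule: <Phi_A | Phi_B> = det(M),  M[i, j] = <a_i | b_j>.
M = A @ B.T
overlap_fast = np.linalg.det(M)

# Brute-force check: the explicit sum over permutations (O(n!) terms), which
# is only feasible for a tiny particle number -- exactly why the determinant
# formula matters.
overlap_slow = 0.0
for perm in permutations(range(n_part)):
    sign = np.linalg.det(np.eye(n_part)[list(perm)])   # +1 or -1
    overlap_slow += sign * np.prod([A[i] @ B[perm[i]] for i in range(n_part)])

print(overlap_fast, overlap_slow)   # the two agree to roundoff
```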

    Parallel processing area extraction and data transfer number reduction for automatic GPU offloading of IoT applications

    For Open IoT, we have previously proposed Tacit Computing, a technology that discovers, on demand, the devices holding the data users need and uses them dynamically, together with an automatic GPU offloading technology as an elementary technology of Tacit Computing. However, that technology improves only a limited range of applications, because it only extracts and optimizes parallelizable loop statements. Thus, in this paper, to improve the performance of more applications automatically, we propose an improved method that also reduces data transfer between the CPU and the GPU. We evaluate the proposed offloading method by applying it to Darknet and find that it runs three times as quickly as a CPU-only execution.
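
    A hedged stand-in for the data-transfer idea (the paper offloads C applications such as Darknet; CuPy and the sizes below are illustrative assumptions, not the authors' tooling): copying arrays between CPU and GPU inside every loop iteration can erase the benefit of offloading, whereas transferring once and keeping intermediate results resident on the GPU removes the repeated transfers.

```python
# Illustrative comparison of per-iteration transfers vs. device-resident data.
import numpy as np
import cupy as cp

x_host = np.random.default_rng(0).standard_normal((4096, 4096)).astype(np.float32)

def offload_naive(x_host, iters=20):
    """Transfer to the GPU and back on every iteration (wasteful)."""
    for _ in range(iters):
        x_dev = cp.asarray(x_host)          # host -> device copy each time
        x_dev = cp.tanh(x_dev @ x_dev)      # work on the GPU
        x_host = cp.asnumpy(x_dev)          # device -> host copy each time
    return x_host

def offload_resident(x_host, iters=20):
    """Transfer once, keep intermediate results on the GPU, transfer once back."""
    x_dev = cp.asarray(x_host)              # single host -> device copy
    for _ in range(iters):
        x_dev = cp.tanh(x_dev @ x_dev)      # all work and temporaries stay on the GPU
    return cp.asnumpy(x_dev)                # single device -> host copy

# Both variants compute the same result; the second minimizes CPU-GPU traffic,
# which is the kind of transfer reduction the proposed method automates.
```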